VC dimension
In statistical learning theory, or sometimes computational learning theory, the VC dimension (for Vapnik–Chervonenkis dimension) is a measure of the capacity (complexity, expressive power, richness, or flexibility) of a statistical classification algorithm, defined as the cardinality of the largest set of points that the algorithm can shatter. It is a core concept in Vapnik–Chervonenkis theory, and was originally defined by Vladimir Vapnik and Alexey Chervonenkis.
Informally, the capacity of a classification model is related to how complicated it can be. For example, consider thresholding a high-degree polynomial: if the polynomial evaluates above zero at a point, that point is classified as positive; otherwise, as negative. A high-degree polynomial can be wiggly, so it can fit a given set of training points well, but one can expect it to make errors on other points precisely because it is so wiggly. Such a polynomial has a high capacity. A much simpler alternative is to threshold a linear function. This function has a low capacity, so it may not fit the training set well. This notion of capacity is made rigorous below.
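A minimal sketch of this contrast (illustrative, not from the article; it assumes 1-D points, NumPy, and sign-thresholding of a least-squares polynomial fit):

 import numpy as np

 # Classify 1-D points by thresholding a fitted polynomial at zero.
 # A high-degree polynomial can reproduce any labeling of distinct
 # training points (high capacity); a degree-1 fit often cannot.
 def fit_threshold_classifier(x, y, degree):
     # Fit a polynomial to the +/-1 labels; classify by its sign.
     coeffs = np.polyfit(x, y, degree)
     return lambda x_new: np.sign(np.polyval(coeffs, x_new))

 x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
 y = np.array([1.0, -1.0, 1.0, -1.0, 1.0])   # alternating labels

 wiggly = fit_threshold_classifier(x, y, degree=4)  # interpolates exactly
 linear = fit_threshold_classifier(x, y, degree=1)  # too simple

 print(wiggly(x))  # [ 1. -1.  1. -1.  1.] -- no training errors
 print(linear(x))  # [ 1.  1.  1.  1.  1.] -- misses the -1 points

Since a degree-4 polynomial can interpolate any values at five distinct points, the degree-4 classifier realizes every labeling of these five points, which is exactly the high capacity described above.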
== Shattering ==

A classification model f with some parameter vector \theta is said to ''shatter'' a set of data points (x_1,x_2,\ldots,x_n) if, for all assignments of labels to those points, there exists a \theta such that the model f makes no errors when evaluating that set of data points.
The VC dimension of a model f is the maximum number of points that can be arranged so that f shatters them. More formally, it is the maximum h such that some set of h data points can be shattered by f.
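To see the definition in action, here is a brute-force check for a toy model class not taken from the article: one-sided thresholds on the real line, where f_t(x) = +1 if x > t and -1 otherwise (the function names are illustrative):

 from itertools import product

 # A set of points is shattered when every labeling in {-1, +1}^n is
 # realized by some threshold t; only finitely many thresholds matter.
 def shatters(points):
     points = sorted(points)
     # Candidate thresholds: below, between, and above the points.
     candidates = ([points[0] - 1.0]
                   + [(a + b) / 2 for a, b in zip(points, points[1:])]
                   + [points[-1] + 1.0])
     def realized(labeling):
         return any(tuple(1 if x > t else -1 for x in points) == labeling
                    for t in candidates)
     return all(realized(lab) for lab in product([-1, 1], repeat=len(points)))

 print(shatters([0.0]))        # True: one point gets both labels
 print(shatters([0.0, 1.0]))   # False: the labeling (+1, -1) is unreachable
 # Some 1-point set is shattered and, by monotonicity, no 2-point set is,
 # so the VC dimension of this class is 1.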
For example, consider a straight line as the classification model: the model used by a perceptron. The line should separate positive data points from negative data points. There exist sets of 3 points that can indeed be shattered using this model (any 3 points that are not collinear can be shattered). However, no set of 4 points can be shattered: by Radon's theorem, any four points can be partitioned into two subsets with intersecting convex hulls, so it is not possible to separate one of these two subsets from the other. Thus, the VC dimension of this particular classifier is 3. It is important to remember that while one can choose any arrangement of points, that arrangement cannot then change when attempting to shatter it for some label assignment. (For three points there are 2^3 = 8 possible label assignments to realize.)
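A sketch of this example in code, assuming SciPy is available (linearly_separable and shattered below are illustrative helpers, not from the article): a labeling is strictly linearly separable exactly when some w, b satisfy y_i (w . x_i + b) >= 1 for all i, which is a linear-programming feasibility problem:

 from itertools import product
 import numpy as np
 from scipy.optimize import linprog

 def linearly_separable(points, labels):
     # Feasibility LP: find (w1, w2, b) with y_i * (w . x_i + b) >= 1,
     # written as -y_i * (w . x_i + b) <= -1 for linprog.
     X = np.asarray(points, dtype=float)
     y = np.asarray(labels, dtype=float)
     A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])
     b_ub = -np.ones(len(X))
     res = linprog(c=np.zeros(X.shape[1] + 1), A_ub=A_ub, b_ub=b_ub,
                   bounds=[(None, None)] * (X.shape[1] + 1))
     return res.success

 def shattered(points):
     return all(linearly_separable(points, labeling)
                for labeling in product([-1, 1], repeat=len(points)))

 print(shattered([(0, 0), (1, 0), (0, 1)]))          # True: 3 non-collinear points
 print(shattered([(0, 0), (1, 0), (0, 1), (1, 1)]))  # False: 4 points fail

The four-point check fails on the labeling that gives the two diagonal corners one label and the other two corners the opposite label, matching the Radon's-theorem argument above.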

Source: excerpted from Wikipedia, the free encyclopedia; the full "VC dimension" article is available on Wikipedia.


